ABSTRACT Traditional software reliability growth models (SRGM) characterize defect discovery with the non-homogeneous Poisson process (NHPP) as a function of testing time or effort. More recently, covariate NHPP SRGM have substantially improved tracking and prediction of the defect discovery process by explicitly incorporating discrete multivariate time series on the amount of each underlying testing activity performed in successive intervals. Both classes of NHPP models, with and without covariates, are parametric in nature, imposing assumptions on the defect discovery process, and, while neural networks have been applied to SRGM without covariates, no such study has been conducted in the context of covariate SRGM. Therefore, this paper assesses the effectiveness of neural networks in predicting the software defect discovery process when covariates are incorporated. Three types of neural networks are considered: (i) recurrent neural networks (RNN), (ii) long short-term memory (LSTM), and (iii) gated recurrent units (GRU), which are then compared with covariate models to validate tracking and predictive accuracy. Our results suggest that GRU achieved better overall goodness of fit, including approximately 3.22 and 1.10 times smaller predictive mean square error and 5.33 and 1.22 times smaller predictive ratio risk on the DS1G and DS2G data sets, respectively, compared to covariate models when a given proportion of the data is used for training. Moreover, to provide an objective comparison, three different training data splits were employed to illustrate the advancements between the top-performing covariate NHPP model and the neural networks, among which GRU performed better in most scenarios. Thus, a neural network model with gated recurrent units may be a suitable alternative to track and predict the number of defects based on covariates associated with the software testing process.
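A minimal sketch of the kind of model the abstract describes: a GRU that maps per-interval testing-activity covariates to the number of defects discovered in each interval. The layer sizes, training setup, and synthetic data below are illustrative assumptions, not the configuration used in the paper.

```python
# Hypothetical GRU defect-prediction sketch (PyTorch); shapes and data are assumed.
import torch
import torch.nn as nn

class DefectGRU(nn.Module):
    def __init__(self, n_covariates, hidden_size=16):
        super().__init__()
        self.gru = nn.GRU(n_covariates, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # defects discovered per interval

    def forward(self, x):                        # x: (batch, intervals, n_covariates)
        out, _ = self.gru(x)                     # hidden state at every interval
        return self.head(out).squeeze(-1)        # (batch, intervals)

# Toy usage: one series, 20 testing intervals, 3 testing-activity covariates.
x = torch.rand(1, 20, 3)
y = torch.poisson(5 * x.sum(-1))                 # synthetic per-interval defect counts
model = DefectGRU(n_covariates=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```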
-
Abstract Advances in machine learning (ML) have led to applications in safety-critical domains, including security, defense, and healthcare. These ML models are confronted with the dynamically changing and actively hostile conditions characteristic of real-world applications, requiring systems incorporating ML to be reliable and resilient. Many studies propose techniques to improve the robustness of ML algorithms. However, fewer consider quantitative techniques to assess changes in the reliability and resilience of these systems over time. To address this gap, this study demonstrates how to collect data during the training and testing of ML that is suitable for the application of software reliability models, with and without covariates, as well as resilience models, and how to interpret the resulting analyses. The proposed approach promotes quantitative risk assessment of ML technologies, providing the ability to track and predict degradation and improvement in ML model performance and assisting ML and system engineers with an objective approach to compare the relative effectiveness of alternative training and testing methods. The approach is illustrated in the context of an image recognition model, which is subjected to two generative adversarial attacks and then iteratively retrained to improve the system's performance. Our results indicate that software reliability models incorporating covariates characterized the misclassification discovery process more accurately than models without covariates. Moreover, a resilience model based on multiple linear regression incorporating interactions between covariates tracks and predicts degradation and recovery of performance best. Thus, software reliability and resilience models offer rigorous quantitative assurance methods for ML-enabled systems and processes.
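To make the reliability-modeling step concrete, here is a hedged sketch of fitting a simple NHPP SRGM without covariates, the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)), to cumulative misclassification counts collected across retraining intervals. The data is synthetic and the model choice is a common baseline, not the paper's covariate models.

```python
# Sketch: baseline Goel-Okumoto fit to cumulative misclassification counts.
# Data below is synthetic; parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    # a: total expected misclassifications; b: discovery rate per interval
    return a * (1.0 - np.exp(-b * t))

rng = np.random.default_rng(1)
t = np.arange(1, 21, dtype=float)                       # testing intervals
cum = goel_okumoto(t, 120, 0.15) + rng.normal(0, 3, t.size)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, cum, p0=(100, 0.1))
print(f"a = {a_hat:.1f}, b = {b_hat:.3f}")
```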
-
Recent research applies soft computing techniques to fit software reliability growth models. However, runtime performance and the distribution of the distance from an optimal solution over multiple runs must be explicitly considered to justify the practical utility of these approaches, promote comparison, and support reproducible research. This paper presents a meta-optimization framework to design stable and efficient multi-phase algorithms for fitting software reliability growth models. The approach combines initial parameter estimation techniques from statistical algorithms, the global search properties of soft computing, and the rapid convergence of numerical methods. Designs that exhibit the best balance between runtime performance and accuracy are identified. The approach is illustrated through non-homogeneous Poisson process and covariate software reliability growth models, including a cross-validation step on data sets not used to identify designs. The results indicate that the non-homogeneous Poisson process model considered is too simple to benefit from soft computing, because it incurs additional runtime with no increase in the accuracy attained. However, a multi-phase design for the covariate software reliability growth model, consisting of the bat algorithm followed by a numerical method, achieves better performance and converges more consistently than a numerical method alone. The proposed approach supports fitting higher-dimensional covariate software reliability growth models and is suitable for implementation in a tool.
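A minimal sketch of the two-phase idea: a global soft-computing search seeds a fast local numerical method on an NHPP log-likelihood. SciPy has no bat algorithm implementation, so differential evolution stands in for the global phase here; the data, bounds, and model are assumptions for illustration.

```python
# Two-phase fit of the Goel-Okumoto NHPP likelihood: global search, then local refinement.
# Differential evolution is a stand-in for the bat algorithm; data is synthetic.
import numpy as np
from scipy.optimize import differential_evolution, minimize

n = np.array([4, 6, 5, 7, 3, 4, 2, 3, 1, 2], dtype=float)   # defects per interval
t = np.arange(1, len(n) + 1, dtype=float)

def neg_log_lik(theta):
    a, b = theta
    m = a * (1 - np.exp(-b * t))                   # cumulative mean value function
    lam = np.diff(np.concatenate(([0.0], m)))      # per-interval expected counts
    lam = np.maximum(lam, 1e-12)
    return -np.sum(n * np.log(lam) - lam)          # Poisson log-likelihood (constant dropped)

# Phase 1: global search over a broad box.
g = differential_evolution(neg_log_lik, bounds=[(1, 500), (1e-4, 2.0)], seed=0)
# Phase 2: rapid local convergence from the global estimate.
r = minimize(neg_log_lik, g.x, method="Nelder-Mead")
print(r.x)
```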
-
A large number of software reliability growth models have been proposed in the literature. Many of these models have also been the subject of optimization problems, including the optimal release problem, in which a decision-maker seeks to minimize cost by balancing the cost of testing against the cost of field failures. However, the majority of these optimal release formulations are either unused or untested. In many cases, researchers derive expressions and apply them to the complete set of failure data in order to identify the time at which cost was minimized, but this is clearly unusable in practice, since it is not possible to go back in time to make a release decision. The only other implicit strategy implied by these optimal release formulations is to refit a model every time a failure occurs and to assess whether the optimal release time has passed or additional testing should be performed.
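A hedged sketch of the refit-at-each-failure strategy the passage describes: after each new interval of data, refit the model and check whether the cost-minimizing release time has already passed. The Goel-Okumoto model, the standard three-term cost function, the unit costs, and the data are all illustrative assumptions.

```python
# Sketch: refit after each interval, then test whether the optimal release time T* has passed.
import numpy as np
from scipy.optimize import curve_fit, minimize_scalar

c_test, c_field, c_time = 10.0, 50.0, 1.0       # assumed unit costs

def m(t, a, b):
    return a * (1 - np.exp(-b * t))             # cumulative expected defects

def expected_cost(T, a, b):
    # defects fixed in test + defects escaping to the field + cost of test time
    return c_test * m(T, a, b) + c_field * (a - m(T, a, b)) + c_time * T

t_obs = np.arange(1, 16, dtype=float)
cum = m(t_obs, 80, 0.12) + np.random.default_rng(0).normal(0, 2, t_obs.size)

for k in range(5, len(t_obs) + 1):              # refit as data accrues
    (a_hat, b_hat), _ = curve_fit(m, t_obs[:k], cum[:k], p0=(50, 0.1), maxfev=10000)
    opt = minimize_scalar(expected_cost, bounds=(0, 200), args=(a_hat, b_hat),
                          method="bounded")
    status = "release now" if opt.x <= t_obs[k - 1] else "keep testing"
    print(f"after interval {k}: T* = {opt.x:.1f} -> {status}")
```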
-
Traditional software reliability growth models only consider defect discovery data, yet the practical concern of software engineers is the removal of these defects. Most attempts to model the relationship between defect discovery and resolution have been restricted to differential equation-based models associated with these two activities. However, defect tracking databases offer a practical source of information on the defect lifecycle suitable for more complete reliability and performance models. This paper explicitly connects software reliability growth models to software defect tracking. Data from a NASA project has been employed to develop differential equation-based models of defect discovery and resolution, as well as distributional and Markovian models of defect resolution. The states of the Markov model represent thirteen unique stages of the NASA software defect lifecycle. Both state transition probabilities and transition time distributions are computed from the defect database. Illustrations compare the predictive and computational performance of the alternative approaches. The results suggest that the simple distributional approach achieves the best tradeoff between these two performance measures, but that enhanced data collection practices could improve the utility of the more advanced approaches and the inferences they enable.
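A minimal sketch of estimating the Markov transition probabilities from a defect tracking database: count observed stage-to-stage transitions and normalize each row. The stage names and transition log below are hypothetical; the paper's NASA lifecycle has thirteen stages.

```python
# Sketch: maximum-likelihood transition matrix from a hypothetical defect lifecycle log.
import numpy as np

stages = ["Open", "Assigned", "Fixed", "Verified", "Closed"]  # assumed stages
idx = {s: i for i, s in enumerate(stages)}

# Each record is one observed (from_stage, to_stage) transition.
log = [("Open", "Assigned"), ("Assigned", "Fixed"), ("Fixed", "Verified"),
       ("Verified", "Closed"), ("Open", "Assigned"), ("Assigned", "Open"),
       ("Assigned", "Fixed"), ("Fixed", "Verified"), ("Verified", "Closed")]

counts = np.zeros((len(stages), len(stages)))
for src, dst in log:
    counts[idx[src], idx[dst]] += 1

row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(np.round(P, 2))   # row i: probability of moving from stage i to each stage
```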
-
Summary Many non-homogeneous Poisson process software reliability growth models (SRGM) are characterized by a single continuous curve. However, failures are driven by factors such as the testing strategy and environment, integration testing, and resource allocation, which can introduce one or more changepoints into the fault detection process. Some researchers have proposed changepoint non-homogeneous Poisson process SRGM, but these consider only a common failure distribution before and after the changepoint. This paper proposes a heterogeneous single-changepoint framework for SRGM, which can exhibit different failure distributions before and after the changepoint. Combinations of two simple and distinct curves, an exponential and an S-shaped curve, are employed to illustrate the concept. Ten data sets are used to compare these heterogeneous models against their homogeneous counterparts. Experimental results indicate that the heterogeneous changepoint models achieve better goodness-of-fit measures on 60% and 80% of the data sets with respect to the Akaike information criterion and predictive sum of squares measures, respectively.
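A hedged sketch of a heterogeneous single-changepoint mean value function: exponential (Goel-Okumoto) growth before the changepoint tau and delayed S-shaped growth after it, glued so the cumulative curve is continuous at tau. The parameter values are illustrative assumptions, and this gluing is one simple way to realize the idea, not necessarily the paper's exact formulation.

```python
# Sketch: piecewise mean value function with different distributions around the changepoint.
import numpy as np

def m_exponential(t, a, b):
    return a * (1 - np.exp(-b * t))                    # Goel-Okumoto curve

def m_s_shaped(t, a, b):
    return a * (1 - (1 + b * t) * np.exp(-b * t))      # delayed S-shaped curve

def m_changepoint(t, tau, a1, b1, a2, b2):
    t = np.asarray(t, dtype=float)
    before = m_exponential(t, a1, b1)
    # after tau, continue from the value reached at the changepoint;
    # m_s_shaped(0) = 0, so the curve is continuous at t = tau
    after = m_exponential(tau, a1, b1) + m_s_shaped(t - tau, a2, b2)
    return np.where(t <= tau, before, after)

t = np.linspace(0, 50, 101)
print(m_changepoint(t, tau=20, a1=60, b1=0.08, a2=40, b2=0.15)[-5:])
```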